
    Adapting End-to-End Speech Recognition for Readable Subtitles

    Automatic speech recognition (ASR) systems are primarily evaluated on transcription accuracy. However, in some use cases such as subtitling, verbatim transcription would reduce output readability given limited screen size and reading time. Therefore, this work focuses on ASR with output compression, a task challenging for supervised approaches due to the scarcity of training data. We first investigate a cascaded system, where an unsupervised compression model is used to post-edit the transcribed speech. We then compare several methods of end-to-end speech recognition under output length constraints. The experiments show that with limited data, far less than needed to train a model from scratch, we can adapt a Transformer-based ASR model to incorporate both transcription and compression capabilities. Furthermore, the best performance in terms of WER and ROUGE scores is achieved by explicitly modeling the length constraints within the end-to-end ASR system. Comment: IWSLT 2020
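    A minimal sketch of one plausible way to expose such a length constraint to a Transformer decoder: embed the remaining token budget and add it to each decoder input, so the model can learn to compress as the budget runs out. The module name and interface below are assumptions for illustration, not the authors' implementation.

        import torch
        import torch.nn as nn

        class LengthCountdownEmbedding(nn.Module):
            """Adds an embedding of the remaining token budget to each
            decoder input embedding (hypothetical conditioning scheme)."""
            def __init__(self, d_model: int, max_budget: int = 256):
                super().__init__()
                self.budget_emb = nn.Embedding(max_budget + 1, d_model)

            def forward(self, token_emb: torch.Tensor, budget: int) -> torch.Tensor:
                # token_emb: (batch, seq_len, d_model); at position t the
                # decoder still has max(budget - t, 0) tokens to spend.
                seq_len = token_emb.size(1)
                remaining = (budget - torch.arange(seq_len, device=token_emb.device)).clamp(min=0)
                return token_emb + self.budget_emb(remaining)  # broadcasts over batch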

    Low-Latency Sequence-to-Sequence Speech Recognition and Translation by Partial Hypothesis Selection

    Encoder-decoder models provide a generic architecture for sequence-to-sequence tasks such as speech recognition and translation. While offline systems are often evaluated on quality metrics like word error rate (WER) and BLEU, latency is also a crucial factor in many practical use cases. We propose three latency reduction techniques for chunk-based incremental inference and evaluate their efficiency in terms of the accuracy-latency trade-off. On the 300-hour How2 dataset, we reduce latency by 83% to 0.8 seconds by sacrificing 1% WER (6% rel.) compared to offline transcription. Although our experiments use the Transformer, the hypothesis selection strategies are applicable to other encoder-decoder models. To avoid expensive re-computation, we use a unidirectionally-attending encoder. After an adaptation procedure to partial sequences, the unidirectional model performs on par with the original model. We further show that our approach is also applicable to low-latency speech translation. On How2 English-Portuguese speech translation, we reduce latency to 0.7 seconds (-84% rel.) while incurring a loss of 2.4 BLEU points (5% rel.) compared to the offline system.
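    One instance of partial hypothesis selection can be sketched in a few lines: after each new audio chunk, re-decode everything seen so far and commit only the prefix on which consecutive hypotheses agree, so emitted tokens are never retracted. The decode() interface below is an assumed black box, and this is an illustration of the general idea rather than the exact set of strategies evaluated in the paper.

        from typing import Callable, Iterable, List

        def common_prefix(a: List[str], b: List[str]) -> List[str]:
            """Longest common prefix of two token sequences."""
            out = []
            for x, y in zip(a, b):
                if x != y:
                    break
                out.append(x)
            return out

        def incremental_decode(chunks: Iterable, decode: Callable[[list], List[str]]):
            """decode() transcribes all audio seen so far (assumed interface)."""
            audio: list = []
            prev_hyp: List[str] = []
            committed: List[str] = []
            for chunk in chunks:
                audio.extend(chunk)
                hyp = decode(audio)
                agreed = common_prefix(prev_hyp, hyp)
                if len(agreed) > len(committed):
                    # Emit only the newly stable tokens; nothing is retracted.
                    yield agreed[len(committed):]
                    committed = agreed
                prev_hyp = hyp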

    Water, Waste, and Disease: Struggles of Chinese communities and environmental racism in California, 1870-1910

    This study takes a fresh look at the anti-Chinese movement in California in the late 19th century through the lens of the environmental humanities, with a focus on environmental justice and environmental racism. The dissertation examines the complex role that water played in the history of Chinese immigration to the United States in the latter half of the 19th century and the early 20th century. This includes various forms of water, such as sewage, and different methods of controlling and managing water resources. Drawing on contemporary understandings of the connections between water, pollution, and disease, this research shows how built environments contributed to environmental injustice against Chinese communities.

    The time course of spatial attention shifts in elementary arithmetic


    Learning an Artificial Language for Knowledge-Sharing in Multilingual Translation

    The cornerstone of multilingual neural translation is shared representations across languages. Given the theoretically infinite representation power of neural networks, semantically identical sentences are likely represented differently. While representing sentences in a continuous latent space ensures expressiveness, it introduces the risk of capturing irrelevant features, which hinders the learning of a common representation. In this work, we discretize the encoder output latent space of multilingual models by assigning encoder states to entries in a codebook, which in effect represents source sentences in a new artificial language. This discretization process not only offers a new way to interpret the otherwise black-box model representations but, more importantly, has the potential to increase robustness in unseen testing conditions. We validate our approach in large-scale experiments with realistic data volumes and domains. When tested in zero-shot conditions, our approach is competitive with two strong alternatives from the literature. We also use the learned artificial language to analyze model behavior, and discover that using a similar bridge language increases knowledge-sharing among the remaining languages.
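    The discretization step can be illustrated with a short vector-quantization sketch (assumptions: PyTorch, nearest-neighbour assignment with a straight-through gradient; the paper's exact codebook training may differ). Each encoder state is snapped to its nearest codebook entry, and the sequence of entry indices is the "artificial language".

        import torch
        import torch.nn as nn

        class Codebook(nn.Module):
            def __init__(self, num_codes: int, d_model: int):
                super().__init__()
                self.codes = nn.Embedding(num_codes, d_model)

            def forward(self, enc: torch.Tensor):
                # enc: (batch, seq_len, d_model). Assign each state to its nearest code.
                flat = enc.reshape(-1, enc.size(-1))           # (B*T, d)
                dists = torch.cdist(flat, self.codes.weight)   # (B*T, num_codes)
                ids = dists.argmin(dim=-1)                     # discrete "words"
                quantized = self.codes(ids).view_as(enc)
                # Straight-through estimator: forward pass uses the code,
                # backward pass routes gradients to the encoder states.
                quantized = enc + (quantized - enc).detach()
                return quantized, ids.view(enc.shape[:-1])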

    Maastricht University’s Multilingual Speech Translation System for IWSLT 2021


    Temporal dynamics of neural activity in the olfactory receptor neurons during odor exposure


    Unsupervised Machine Translation On Dravidian Languages

    Unsupervised neural machine translation (UNMT) is especially beneficial for low-resource languages such as those from the Dravidian family. However, UNMT systems tend to fail in realistic scenarios involving actual low-resource languages. Recent works propose to utilize auxiliary parallel data and have achieved state-of-the-art results. In this work, we focus on unsupervised translation between English and Kannada, a low-resource Dravidian language. We additionally utilize a limited amount of auxiliary data between English and other related Dravidian languages. We show that unifying the writing systems is essential for unsupervised translation between the Dravidian languages. We explore several model architectures that use the auxiliary data in order to maximize knowledge sharing and enable UNMT for distant language pairs. Our experiments demonstrate that it is crucial to include auxiliary languages that are similar to our focal language, Kannada. Furthermore, we propose a metric to measure language similarity and show that it serves as a good indicator for selecting the auxiliary languages.
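    To illustrate why unifying the writing systems is cheap to attempt: the Unicode blocks for Tamil, Telugu, Kannada, and Malayalam are laid out in parallel (a legacy of ISCII), so a rough script mapping is a constant offset per block. The sketch below shows this general trick; it is an assumption about the kind of preprocessing involved, not necessarily the paper's exact method.

        # Parallel Unicode block starts for four Dravidian scripts.
        BLOCK_START = {
            "tamil": 0x0B80,
            "telugu": 0x0C00,
            "kannada": 0x0C80,
            "malayalam": 0x0D00,
        }
        BLOCK_SIZE = 0x80

        def to_kannada_script(text: str, source: str) -> str:
            """Map characters of `source` script onto the Kannada block by offset.
            Approximate: code points without a counterpart, and everything
            outside the source block, pass through unchanged."""
            src, dst = BLOCK_START[source], BLOCK_START["kannada"]
            out = []
            for ch in text:
                cp = ord(ch)
                if src <= cp < src + BLOCK_SIZE:
                    out.append(chr(dst + (cp - src)))
                else:
                    out.append(ch)
            return "".join(out)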

    More on Rainbow Cliques in Edge-Colored Graphs

    In an edge-colored graph $G$, a rainbow clique $K_k$ is a complete subgraph on $k$ vertices in which all the edges have distinct colors. Let $e(G)$ and $c(G)$ be the number of edges and colors in $G$, respectively. In this paper, we show that for any $\varepsilon>0$, if $e(G)+c(G) \geq \left(1+\frac{k-3}{k-2}+2\varepsilon\right)\binom{n}{2}$ and $k\geq 3$, then for sufficiently large $n$, the number of rainbow cliques $K_k$ in $G$ is $\Omega(n^k)$. We also characterize the extremal graphs $G$ without a rainbow clique $K_k$, for $k=4,5$, when $e(G)+c(G)$ is maximum. Our results not only address existing questions but also complete the findings of Ehard and Mohr (Rainbow triangles and cliques in edge-colored graphs, European Journal of Combinatorics, 84:103037, 2020). Comment: 16 pages
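    To make the threshold concrete, the stated condition specializes as follows (arithmetic only, no claims beyond the abstract):

        % k = 3: the term (k-3)/(k-2) vanishes, recovering the triangle case;
        % k = 4: the term is 1/2, so the coefficient becomes 3/2 + 2*eps.
        \[
        k=3:\quad e(G)+c(G)\ \ge\ (1+2\varepsilon)\binom{n}{2}
        \ \Longrightarrow\ \#\{\text{rainbow } K_3\} = \Omega(n^3),
        \]
        \[
        k=4:\quad e(G)+c(G)\ \ge\ \Bigl(\tfrac{3}{2}+2\varepsilon\Bigr)\binom{n}{2}
        \ \Longrightarrow\ \#\{\text{rainbow } K_4\} = \Omega(n^4).
        \]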